Quick Tutorial on Sampling

Sampling is easiest to understand in one dimension, with time as the independent variable. Let us use a simple cosine function to demonstrate.

We will set the cosine function's frequency to 10 Hertz and study one second of its output.

We also need to choose a sampling interval: the time elapsed between consecutive samples.

Conventional sampling theory (the Nyquist–Shannon sampling theorem) states that, to capture a signal of a given frequency, the sampling rate must be at least double that frequency; equivalently, the sampling interval must be no longer than half the signal's period. In other words, sample at least twenty times per second if you want to digitize a signal that cycles ten times per second.

We can plot the 10 Hz cosine function using the code below...
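A minimal sketch of that code using NumPy (the plotting call is left as a comment so the snippet runs without a display; any plotting library would do):

```python
import numpy as np

f = 10.0                           # cosine frequency in Hz
fs = 2 * f                         # the Nyquist rate: 20 samples per second
t = np.arange(0.0, 1.0, 1.0 / fs)  # one second of sample times
x = np.cos(2 * np.pi * f * t)      # the sampled cosine

print(len(t))                      # 20 samples in one second
# import matplotlib.pyplot as plt
# plt.plot(t, x, "o-"); plt.show()
```

With this phase, every sample lands exactly on a peak or a trough, so the plotted waveform is a zigzag rather than a smooth cosine.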

We do see about ten oscillations, but the cosine function seems a little pointy, no? The short explanation is that sampling at exactly the Nyquist rate preserves only the signal's frequency, not its waveform. If we want the waveform, we must sample faster.

Let us sample the same cosine function 100 times faster and compare.
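Sampling 100 times faster, a 2 kHz sampling rate in this sketch, only changes one line:

```python
import numpy as np

f = 10.0                           # cosine frequency in Hz
fs = 100 * (2 * f)                 # 100x the Nyquist rate: 2000 samples per second
t = np.arange(0.0, 1.0, 1.0 / fs)  # one second of sample times
x = np.cos(2 * np.pi * f * t)      # the sampled cosine

print(len(t))                      # 2000 samples: 200 per cycle
# plt.plot(t, x) now traces a smooth cosine
```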

Now that is more like it! So, the looming question: what happens if we choose the sampling rate poorly?

Distortion introduced by undersampling is known as aliasing. Choosing a sampling rate that is too high is mostly harmless, provided you have the resources to support the extra computational overhead of processing unnecessary samples. Choosing a rate that is too low, or even an awkward rate above the minimum, may lead to aliasing, apparent phase shifts, and overall distortion.
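As an illustrative sketch of aliasing (an example added here, not part of the running experiment): if we sample the 10 Hz cosine at only 12 Hz, the resulting samples are identical to those of a 2 Hz cosine, so the frequency is misread entirely:

```python
import numpy as np

f = 10.0                            # true signal frequency in Hz
fs = 12.0                           # undersampled: below the 20 Hz Nyquist rate
t = np.arange(12) / fs              # one second of sample times

undersampled = np.cos(2 * np.pi * f * t)
alias = np.cos(2 * np.pi * (fs - f) * t)   # a 2 Hz cosine

print(np.allclose(undersampled, alias))    # True
```

The samples alone cannot tell the two signals apart: the 10 Hz content has "aliased" down to 2 Hz.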

In practice, I find that sampling ten times faster than the Nyquist rate is suitable for most applications. Let us look at how this unfolds in our cosine example.
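At ten times the Nyquist rate, 200 samples per second for our 10 Hz cosine, the sketch becomes:

```python
import numpy as np

f = 10.0                           # cosine frequency in Hz
fs = 10 * (2 * f)                  # 10x the Nyquist rate: 200 samples per second
t = np.arange(0.0, 1.0, 1.0 / fs)  # one second of sample times
x = np.cos(2 * np.pi * f * t)      # the sampled cosine

print(len(t))                      # 200 samples: 20 per cycle
```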

We can see that sampling at exactly the Nyquist rate captures the frequency but little else, that somewhat higher rates give rather unpredictable waveform fidelity, and that results settle down once we sample at roughly ten times the minimum (10x Nyquist).
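One rough way to quantify this (an illustrative calculation added here, not from the experiment above): in the worst case, the nearest sample falls half a sample interval away from a true peak, so the largest amplitude a sample can capture is cos(pi * f / fs). Tabulating that worst case for a few multiples of the Nyquist rate:

```python
import numpy as np

f = 10.0                              # signal frequency in Hz
for m in (1, 2, 5, 10, 100):          # multiples of the 20 Hz Nyquist rate
    fs = m * (2 * f)
    worst_peak = np.cos(np.pi * f / fs)   # worst-case captured peak amplitude
    print(f"{m:>3}x Nyquist: worst-case peak = {worst_peak:.4f}")
```

At exactly the Nyquist rate the worst case is zero (an unlucky phase samples only the zero crossings), while at 10x Nyquist the peaks are captured to within about 1.2% of their true amplitude.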